4 research outputs found

    Explainable Intrusion Detection Systems using white box techniques

    Get PDF
    Artificial Intelligence (AI) has found increasing application across domains, revolutionizing problem-solving and data analysis. However, in decision-sensitive areas such as Intrusion Detection Systems (IDS), trust and reliability are vital, posing challenges for traditional black box AI systems. These black box IDS, while accurate, lack transparency, making it difficult to understand the reasons behind their decisions. This dissertation explores the concept of eXplainable Intrusion Detection Systems (X-IDS) and addresses the issue of trust in X-IDS. It examines the limitations of common black box IDS and the complexities of explainability methods, leading to the fundamental question of whether explanations generated by black box explainer modules can themselves be trusted. To address these challenges, this dissertation presents the concept of white box explanations, which are innately explainable. While white box algorithms are typically simpler and more interpretable, they often sacrifice accuracy. This work, however, uses white box Competitive Learning (CL), which can achieve accuracy competitive with black box IDS. We introduce Rule Extraction (RE) as another white box technique for explaining black box IDS: decision trees are trained on the inputs, weights, and outputs of black box models, yielding human-readable rulesets that serve as global model explanations. Together, these white box techniques offer both accuracy and trustworthiness, two properties that are difficult to achieve simultaneously. This work aims to address gaps in the existing literature, including the need for highly accurate white box IDS, the lack of a methodology for understanding explanations, the prevalence of small testing datasets, and the scarcity of comparisons between white box and black box models. To achieve these goals, the study employs CL and eclectic RE algorithms. CL models offer innate explainability and high accuracy in IDS applications, while eclectic RE enhances trustworthiness. The contributions of this dissertation include a novel X-IDS architecture featuring Self-Organizing Map (SOM) models that adhere to DARPA's guidelines for explainable systems, an extended X-IDS architecture incorporating three CL-based algorithms, and a hybrid X-IDS architecture combining a Deep Neural Network (DNN) predictor with a white box eclectic RE explainer. These architectures yield more explainable, trustworthy, and accurate X-IDS, paving the way for enhanced AI solutions in decision-sensitive domains.
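    The surrogate-tree idea behind RE can be illustrated with a short sketch. Note the assumptions: the dissertation's eclectic RE also draws on model weights and internal activations, whereas this minimal scikit-learn example mimics only the input-to-prediction mapping, and it uses synthetic stand-in data rather than an IDS dataset.

        # Minimal sketch of decision-tree surrogate rule extraction. The
        # dissertation's eclectic RE also uses model weights/activations;
        # this toy version approximates only the input -> prediction mapping.
        from sklearn.datasets import make_classification
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Synthetic stand-in for network-flow features (assumption: a real
        # X-IDS would train on NSL-KDD or CIC-IDS-2017 instead).
        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

        # 1. Train the black box predictor (a small DNN).
        black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0).fit(X, y)

        # 2. Relabel the inputs with the black box's own predictions.
        y_bb = black_box.predict(X)

        # 3. Fit a shallow decision tree to mimic the black box globally.
        surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y_bb)

        # 4. Export the tree as a human-readable ruleset (global explanation)
        #    and report how faithfully it reproduces the black box.
        print(export_text(surrogate, feature_names=[f"f{i}" for i in range(10)]))
        print("fidelity to black box:", surrogate.score(X, y_bb))

    The fidelity score on the last line measures agreement between surrogate and black box, which is the relevant quantity for trusting the extracted rules.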

    Rorty’s Social Theory and the Narrative of U.S. History Curriculum

    Get PDF
    This paper explores the implications of creating a U.S. history narrative from a Rortyan perspective. First, we review Rorty's social theory. Second, we discuss the implications of his ideas for the creation of a U.S. history narrative. Finally, we examine two concerns that would likely emerge if a Rortyan U.S. history curriculum were taught in our public schools.

    Creating an Explainable Intrusion Detection System Using Self Organizing Maps

    Full text link
    Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification of why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a Self-Organizing Map (SOM) based X-IDS that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual data points to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets.
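    As a hedged illustration of how a SOM yields both explanation types, the sketch below uses the third-party minisom package on random stand-in features; the paper's actual architecture, datasets, and hyperparameters differ. The U-matrix plays the role of the global explanation and a sample's best matching unit (BMU) the local one.

        # Hedged sketch of SOM-based global/local explanations using the
        # third-party `minisom` package; the map size and hyperparameters
        # here are illustrative, not the paper's.
        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(0)
        X = rng.random((1000, 8))  # stand-in for normalized IDS features

        som = MiniSom(10, 10, input_len=8, sigma=1.0, learning_rate=0.5,
                      random_seed=0)
        som.train_random(X, num_iteration=5000)

        # Global explanation: the U-matrix shows cluster structure across the
        # whole map; high values mark boundaries between regions of the input
        # space, which an analyst can inspect as a heatmap.
        u_matrix = som.distance_map()

        # Local explanation: the best matching unit (BMU) places one sample
        # on the map, showing which learned region drove its prediction.
        sample = X[0]
        bmu = som.winner(sample)
        print("BMU coordinates:", bmu, "U-matrix value:", u_matrix[bmu])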

    A Performance-Oriented Comparison of Neural Network Approaches for Anomaly-based Intrusion Detection

    No full text
    Intrusion Detection Systems employ anomaly detection algorithms to detect malicious or unauthorized activities in real time. Anomaly detection algorithms that exploit artificial neural networks (ANN) have recently gained particular interest. These algorithms are usually evaluated and compared through effectiveness measures, which quantify how well anomalies are identified based on detection capabilities. However, to the best of our knowledge, their performance characterization in terms of computational cost in time and space, such as training time and memory consumption, together with a quantitative analysis of the trade-offs between algorithm effectiveness and performance, has not yet been explored. In this work, we select four recently proposed unsupervised anomaly detection algorithms based on ANN, namely REPresentations for a random nEarest Neighbor (REPEN), DevNet, OmniAnomaly, and Multi-Objective Generative Adversarial Active Learning (MO-GAAL), and perform a variety of experiments to evaluate the trade-offs between their effectiveness and performance using two reference datasets: NSL-KDD and CIC-IDS-2017. Our results confirm the importance of this study, showing that none of the selected algorithms dominates the others in terms of both effectiveness and performance. Furthermore, the results show that approaches based on Recurrent Neural Networks, which exploit the temporal dependency of the samples, have a clear advantage over the others in terms of effectiveness, while exhibiting the worst execution times.
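    The trade-off measurement itself is straightforward to reproduce in outline. The sketch below is only illustrative: scikit-learn's IsolationForest stands in for the ANN detectors studied (REPEN, DevNet, OmniAnomaly, and MO-GAAL are not scikit-learn models) and the data are synthetic, but the timing, memory, and effectiveness bookkeeping mirrors the kind of measurement the paper reports.

        # Illustrative effectiveness-vs-performance harness. IsolationForest
        # is a stand-in (assumption): the paper's ANN detectors are not
        # scikit-learn models, and the traffic data here are synthetic.
        import time
        import tracemalloc
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(5000, 20))                      # benign-only training traffic
        X_test = np.vstack([rng.normal(size=(900, 20)),            # benign
                            rng.normal(loc=4.0, size=(100, 20))])  # anomalies
        y_test = np.r_[np.zeros(900), np.ones(100)]

        # Performance: wall-clock training time and peak memory.
        tracemalloc.start()
        t0 = time.perf_counter()
        model = IsolationForest(random_state=0).fit(X_train)
        train_time = time.perf_counter() - t0
        _, peak_mem = tracemalloc.get_traced_memory()
        tracemalloc.stop()

        # Effectiveness: detection quality (higher score = more anomalous).
        t0 = time.perf_counter()
        scores = -model.score_samples(X_test)
        detect_time = time.perf_counter() - t0

        print(f"AUC={roc_auc_score(y_test, scores):.3f}  "
              f"train={train_time:.2f}s  detect={detect_time:.3f}s  "
              f"peak_mem={peak_mem / 1e6:.1f}MB")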